In numerical analysis, inverse iteration (also known as the inverse power method) is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.〔Ernst Pohlhausen, ''Berechnung der Eigenschwingungen statisch-bestimmter Fachwerke'', ZAMM – Zeitschrift für Angewandte Mathematik und Mechanik 1, 28–42 (1921).〕

The inverse power iteration algorithm starts with an approximation <math>\mu</math> for the eigenvalue corresponding to the desired eigenvector and a vector <math>b_0</math>, either a randomly selected vector or an approximation to the eigenvector. The method is described by the iteration

: <math>b_{k+1} = \frac{(A - \mu I)^{-1} b_k}{C_k},</math>

where <math>C_k</math> are some constants usually chosen as <math>C_k = \left\| (A - \mu I)^{-1} b_k \right\|</math>. Since eigenvectors are defined up to multiplication by a constant, the choice of <math>C_k</math> can be arbitrary in theory; practical aspects of the choice of <math>C_k</math> are discussed below.

At every iteration, the vector <math>b_k</math> is multiplied by the inverse of the matrix <math>A - \mu I</math> and normalized. It is exactly the same formula as in the power method, except that the matrix <math>A</math> is replaced by <math>(A - \mu I)^{-1}</math>. The closer the approximation <math>\mu</math> is to the eigenvalue, the faster the algorithm converges; however, an incorrect choice of <math>\mu</math> can lead to slow convergence or to convergence to an eigenvector other than the one desired. In practice, the method is used when a good approximation for the eigenvalue is known, and hence one needs only a few (quite often just one) iterations.

== Theory and convergence ==

The basic idea of power iteration is choosing an initial vector <math>b</math> (either an eigenvector approximation or a random vector) and iteratively calculating <math>Ab, A^2 b, A^3 b, \ldots</math>. Except for a set of measure zero, for any initial vector the result will converge to an eigenvector corresponding to the dominant eigenvalue.
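The iteration above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the function name, the diagonal test matrix, and the iteration count are assumptions for the example. Note that the inverse is never formed explicitly: each step solves a linear system with <math>A - \mu I</math>, which is equivalent to multiplying by its inverse.

```python
import numpy as np

def inverse_iteration(A, mu, b0, num_iters=10):
    """Approximate the eigenvector of A whose eigenvalue is closest
    to the shift mu, by repeatedly applying (A - mu*I)^{-1} and
    normalizing (C_k = ||(A - mu*I)^{-1} b_k||)."""
    n = A.shape[0]
    M = A - mu * np.eye(n)
    b = b0 / np.linalg.norm(b0)
    for _ in range(num_iters):
        b = np.linalg.solve(M, b)   # apply (A - mu*I)^{-1} via a linear solve
        b = b / np.linalg.norm(b)   # normalize: divide by C_k
    return b

# Example (assumed matrix): the eigenvalue of A closest to mu = 2.9 is 3,
# so the iteration converges to the corresponding eigenvector (0, 1, 0).
A = np.diag([1.0, 3.0, 10.0])
b = inverse_iteration(A, 2.9, np.ones(3))
```

Because the shift 2.9 is close to the eigenvalue 3, the error shrinks by roughly a factor of <math>|3 - 2.9| / |1 - 2.9| \approx 0.05</math> per iteration, so a handful of iterations already gives an accurate eigenvector.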
Inverse iteration does the same for the matrix <math>(A - \mu I)^{-1}</math>, so it converges to the eigenvector corresponding to the dominant eigenvalue of the matrix <math>(A - \mu I)^{-1}</math>. The eigenvalues of this matrix are <math>(\lambda_1 - \mu)^{-1}, \ldots, (\lambda_n - \mu)^{-1}</math>, where <math>\lambda_i</math> are the eigenvalues of <math>A</math>. The largest of these numbers corresponds to the smallest of <math>|\lambda_1 - \mu|, \ldots, |\lambda_n - \mu|</math>, and the eigenvectors of <math>A</math> and of <math>(A - \mu I)^{-1}</math> are the same. Conclusion: the method converges to the eigenvector of the matrix <math>A</math> corresponding to the eigenvalue closest to <math>\mu</math>. In particular, taking <math>\mu = 0</math>, we see that the iteration converges to the eigenvector corresponding to the eigenvalue of <math>A</math> with the smallest absolute value.
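The special case <math>\mu = 0</math> can be checked numerically. In this sketch (the 2×2 symmetric matrix is an arbitrary example chosen for illustration, not taken from the source), repeatedly applying <math>A^{-1}</math> and normalizing converges to the eigenvector of the smallest-magnitude eigenvalue:

```python
import numpy as np

# Arbitrary symmetric example matrix; its eigenvalues are about 4.27 and 0.23.
A = np.array([[4.0, 1.0],
              [1.0, 0.5]])

b = np.ones(2)
for _ in range(30):
    b = np.linalg.solve(A, b)   # multiply by A^{-1} (inverse iteration with mu = 0)
    b = b / np.linalg.norm(b)

# Reference answer: eigenvector of the eigenvalue with smallest absolute value.
vals, vecs = np.linalg.eigh(A)
v = vecs[:, np.argmin(np.abs(vals))]
```

After the loop, `b` agrees with `v` up to sign, since eigenvectors are only defined up to a scalar factor; the error contracts by roughly <math>|0.23| / |4.27| \approx 0.05</math> per iteration.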